Stein's example
Stein's example (also known as Stein's phenomenon or Stein's paradox), in decision theory and estimation theory, is the phenomenon that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower expected mean squared error) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University, who discovered the phenomenon in 1955.
An intuitive explanation is that optimizing for the mean squared error of a ''combined'' estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent; this occurs in channel estimation in telecommunications, for instance (different factors affect overall channel performance). On the other hand, if one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse.
== Formal statement ==

The following is perhaps the simplest form of the paradox, the special case in which the number of observations is equal to (rather than greater than) the number of parameters to be estimated. Let ''θ'' be a vector consisting of ''n'' ≥ 3 unknown parameters. To estimate these parameters, a single measurement ''X''<sub>''i''</sub> is performed for each parameter ''θ''<sub>''i''</sub>, resulting in a vector ''X'' of length ''n''. Suppose the measurements are independent Gaussian random variables with mean ''θ'' and variance 1, i.e.,
:X \sim N(\theta, 1). \,
Thus, each parameter is estimated using a single noisy measurement, and each measurement is equally inaccurate.
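As a concrete illustration (not part of the original formulation), this measurement model can be simulated in a few lines of Python. The sketch below assumes NumPy is available; the parameter vector <code>theta</code> is an arbitrary choice for illustration.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary illustrative parameter vector with n = 5 (>= 3) components.
theta = np.array([1.0, -2.0, 0.5, 3.0, 0.0])

# One noisy measurement per parameter: X_i ~ N(theta_i, 1).
X = theta + rng.standard_normal(theta.shape)
</syntaxhighlight>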
Under such conditions, it is most intuitive (and most common) to use each measurement as an estimate of its corresponding parameter. This so-called "ordinary" decision rule can be written as
:\hat\theta = X. \,
The quality of such an estimator is measured by its risk function. A commonly used risk function is the mean squared error, defined as
:\operatorname{E} \left( \| \theta - \hat\theta \|^2 \right).
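For the ordinary rule this risk equals ''n'' exactly, since each of the ''n'' coordinates contributes an expected squared error of 1. Continuing the illustrative sketch above, a Monte Carlo check confirms this:
<syntaxhighlight lang="python">
# Monte Carlo estimate of the risk E||theta - X||^2 of the ordinary rule.
# Each of the n coordinates has unit variance, so the true risk equals n.
trials = 100_000
X_samples = theta + rng.standard_normal((trials, theta.size))
risk_ordinary = np.mean(np.sum((X_samples - theta) ** 2, axis=1))
print(risk_ordinary)  # close to n = 5
</syntaxhighlight>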
Surprisingly, it turns out that the "ordinary" estimator proposed above is suboptimal in terms of mean squared error when ''n'' ≥ 3. In other words, in the setting discussed here, there exist alternative estimators which ''always'' achieve lower ''mean'' squared error, no matter what the value of ''θ'' is.
For a given ''θ'', one could obviously define a perfect "estimator" which is always just ''θ'', but this estimator would be bad for other values of ''θ''. The estimators of Stein's paradox are, for a given ''θ'', better than ''X'' for some values of ''X'' but necessarily worse for others (except perhaps for one particular ''θ'' vector, for which the new estimate is always better than ''X''). It is only on average that they are better.
More accurately, an estimator \hat\theta_1 is said to dominate another estimator \hat\theta_2 if, for all values of \theta, the risk of \hat\theta_1 is lower than, or equal to, the risk of \hat\theta_2, ''and'' if the inequality is strict for some \theta. An estimator is said to be ''admissible'' if no other estimator dominates it; otherwise it is ''inadmissible''. Thus, Stein's example can be simply stated as follows: ''The ordinary decision rule for estimating the mean of a multivariate Gaussian distribution is inadmissible under mean squared error risk.''
Many simple, practical estimators achieve better performance than the ordinary estimator. The best-known example is the James–Stein estimator, which works by starting at ''X'' and moving towards a particular point (such as the origin) by an amount inversely proportional to the distance of ''X'' from that point.
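To sketch how such an estimator behaves, the simulation above can be continued with the James–Stein estimator for unit variance, \hat\theta_{JS} = \left(1 - (n-2)/\|X\|^2\right) X, which shrinks ''X'' towards the origin (an illustrative computation, not a full treatment):
<syntaxhighlight lang="python">
def james_stein(x):
    """James-Stein estimator: shrink each row of x towards the origin."""
    n = x.shape[-1]
    norm_sq = np.sum(x ** 2, axis=-1, keepdims=True)
    return (1.0 - (n - 2) / norm_sq) * x

risk_js = np.mean(np.sum((james_stein(X_samples) - theta) ** 2, axis=1))
print(risk_js)  # below risk_ordinary, whatever theta is, when n >= 3
</syntaxhighlight>
Note that the shrinkage factor turns negative when \|X\|^2 < n - 2; the "positive-part" variant, which clips the factor at zero, has uniformly lower risk still.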
For a sketch of the proof of this result, see Proof of Stein's example.
